In this paper we analyze several new methods for solving nonconvex optimization problems whose objective function is the sum of two terms: one nonconvex and smooth, the other convex, simple, and with known structure. We consider both unconstrained and linearly constrained nonconvex problems. For optimization problems of this structure, we propose random coordinate descent algorithms and analyze their convergence properties. In the general case, where the objective function is nonconvex and composite, we prove asymptotic convergence of the sequences generated by our algorithms to stationary points, as well as a sublinear rate of convergence in expectation for some optimality measure. Additionally, if the objective function satisfies an error bound condition, we derive a local linear rate of convergence for the expected values of the objective function. We also present extensive numerical experiments evaluating the performance of our algorithms in comparison with state-of-the-art methods.
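As a rough illustration of the algorithmic template the abstract describes (not the paper's exact scheme), the sketch below implements one common variant in Python: a random coordinate proximal gradient step for F(x) = f(x) + λ‖x‖₁, where f is smooth and possibly nonconvex and the ℓ1 term stands in for the simple convex part. The function names, the uniform sampling rule, and the coordinate-wise Lipschitz constants `L[i]` are assumptions made for this example.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*|.| (soft-thresholding), the prox map of the l1 term."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def random_coordinate_prox_descent(grad_i, n, L, lam, x0, iters=10_000, seed=0):
    """Random coordinate proximal gradient for F(x) = f(x) + lam * ||x||_1.

    grad_i(x, i) returns the i-th partial derivative of the smooth
    (possibly nonconvex) term f; L[i] is a coordinate-wise Lipschitz
    constant of that partial derivative.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        i = rng.integers(n)          # pick one coordinate uniformly at random
        g = grad_i(x, i)             # partial gradient of the smooth term
        # proximal gradient step on coordinate i only
        x[i] = soft_threshold(x[i] - g / L[i], lam / L[i])
    return x
```

The uniform sampling and the soft-thresholding prox are illustrative choices; the same template accommodates other separable convex terms through their coordinate-wise proximal operators, and nonuniform sampling distributions are also common in this literature.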